Talk by Gordon Pipa, University of Osnabrueck / MPI Frankfurt. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley on Feb. 24, 2010.
Abstract. My research focuses on understanding how information processing and cognition can arise from the collective self-organization of elements interacting across many spatial and temporal scales. Here I will present, first, an overview of my data-driven research and, second, a new principled mechanism for self-organized information processing in complex neuronal networks. I will close the talk with an outlook on future research.
Part One: Available recording technologies for neuronal signals highlight that information processing involves neuronal dynamics on various spatial and temporal scales. To study such multi-scale information processing, we developed a new, unifying time-series analysis that infers how the spiking activity of individual neurons integrates information from other spatial and temporal scales, measured, for example, by the spiking activity of other neurons or by surrogates for neuronal mass activity such as local field potentials (LFP) and the electrocorticography (ECoG) that we record in experimental animals. Here I will present this new concept based on data recorded in V1 of awake monkeys watching laboratory stimuli and natural movies. We demonstrate, first, that the information encoding of individual neurons is modulated by the activity of the embedding neuronal populations on different temporal scales, and, second, that the encoding becomes more complex, and the modulation of the encoding stronger, for complex natural stimuli than for simple laboratory stimuli such as gratings and moving bars. Our results show that understanding the neuronal code requires treating the brain as a complex multi-scale system.
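The abstract does not specify the analysis model, so the following is only a hedged sketch of the general idea: regress a neuron's spiking on population signals at different temporal scales and read the fitted coefficients as the strength of cross-scale modulation. All signals and weights here are synthetic stand-ins (a fast population-rate surrogate and a slow LFP-like oscillation), and the logistic GLM fit by gradient ascent is a generic choice, not the method used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000

# Surrogate signals on two temporal scales (illustration only):
# a fast population-rate signal and a slow LFP-like oscillation.
fast_pop = rng.standard_normal(T)
slow_lfp = np.sin(2 * np.pi * np.arange(T) / 200.0)

# Simulate one neuron whose spiking probability is modulated by both
# scales; the weights (1.0, 0.8) are hypothetical ground truth.
drive = -2.5 + 1.0 * fast_pop + 0.8 * slow_lfp
p_spike = 1.0 / (1.0 + np.exp(-drive))
spikes = (rng.random(T) < p_spike).astype(float)

# Fit a logistic GLM (intercept + both scales) by gradient ascent
# on the Bernoulli log-likelihood.
X = np.column_stack([np.ones(T), fast_pop, slow_lfp])
w = np.zeros(3)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 2.0 * X.T @ (spikes - p) / T  # averaged gradient step

# w[1] and w[2] estimate how strongly each temporal scale
# modulates the neuron's encoding.
```

With enough data the fitted `w` approaches the simulated weights; comparing such coefficients across stimulus conditions is one simple way to quantify the claim that cross-scale modulation strengthens for natural stimuli.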
Part Two: Reservoir computing, originally introduced in the context of echo state machines and liquid state machines (LSMs), has been proposed as a promising computational model for information processing in complex networks. Reservoir computing is a universal framework for computations such as prediction, classification, and memorization of information contained in time-varying input streams. Here I will present an extension of the original LSM concept that incorporates self-organization based on neuronal plasticity. It considers two types: first, spike-timing-dependent plasticity (STDP), which changes synaptic strength and has been associated with sequence learning, and second, intrinsic plasticity (IP), which changes the excitability of individual neurons to maintain homeostasis. We demonstrate that the combination of both types, first, optimizes information processing, second, leads to self-organized criticality of the network dynamics, and third, that the intrinsic noise introduced by intrinsic plasticity increases the robustness of information processing in a high-noise regime.
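To make the reservoir-computing framework concrete, here is a minimal rate-based echo state network sketch: a fixed random recurrent network transforms an input stream, and only a linear readout is trained. This is the classic setup the talk extends; the spiking dynamics, STDP, and IP of the actual model are not reproduced here. The reservoir size, spectral radius of 0.9, the sine input, and the one-step-ahead prediction task are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 100, 2000, 200

# Input stream; the task is one-step-ahead prediction.
u = np.sin(0.1 * np.arange(T + 1))

# Fixed random reservoir, rescaled so its spectral radius is < 1
# (a standard sufficient condition for the echo state property).
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Drive the reservoir and record its states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout, via ridge regression,
# discarding an initial washout period.
X = states[washout:]
y = u[washout + 1 : T + 1]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
mse = np.mean((X @ w_out - y) ** 2)
```

In the extension described in the talk, the reservoir weights and neuronal excitabilities would additionally adapt through STDP and IP rather than staying fixed as they do in this sketch.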